List of Flash News about DeepSeek V3.2
| Time | Details |
|---|---|
| 2025-10-22 04:00 | DeepSeek v3.2 685B MoE Cuts AI Inference Costs 6–7x and Speeds Long-Context Inference 2–3x; MIT-Licensed and Huawei-Optimized — Trading Takeaways for AI Infrastructure. According to @DeepLearningAI (Oct 22, 2025): DeepSeek's new 685B-parameter MoE v3.2 attends only to the most relevant tokens, delivering 2–3x faster long-context inference than v3.1; processing is 6–7x cheaper than v3.1, with the API priced at $0.28/$0.028/$0.42 per 1M input/cached/output tokens; the model weights are MIT-licensed and optimized for Huawei and other Chinese chips, broadening deployment options on China-based compute; performance is broadly similar to v3.1, with small gains on coding and agentic tasks and slight dips on some science and math benchmarks. These disclosed cost and latency figures give traders a concrete benchmark for tracking pricing pressure and efficiency trends across AI infrastructure, decentralized compute, and on-chain agent tooling sectors. |
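The per-token prices quoted above can be turned into a quick request-cost estimate. A minimal sketch, using the reported $0.28/$0.028/$0.42 per 1M input/cached/output token prices; the token counts in the example are hypothetical, chosen only for illustration:

```python
# USD per 1M tokens, as quoted for the DeepSeek v3.2 API:
# input / cached-input / output.
PRICE_PER_M = {"input": 0.28, "cached": 0.028, "output": 0.42}

def api_cost(tokens: dict) -> float:
    """Return the total USD cost for a request, given token counts by type."""
    return sum(PRICE_PER_M[kind] * count / 1_000_000
               for kind, count in tokens.items())

# Hypothetical request: 100k fresh input tokens, 400k cache hits, 20k output.
cost = api_cost({"input": 100_000, "cached": 400_000, "output": 20_000})
print(f"${cost:.4f}")  # 0.28*0.1 + 0.028*0.4 + 0.42*0.02 = $0.0476
```

At these prices, cached input is an order of magnitude cheaper than fresh input, so workloads with heavy prompt reuse see most of the quoted cost advantage.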